To appear in SIAM J. on Optimization

INCREMENTAL SUBGRADIENT METHODS FOR NONDIFFERENTIABLE OPTIMIZATION [1]
Authors: Angelia Nedić and Dimitri P. Bertsekas [2]
Abstract
We consider a class of subgradient methods for minimizing a convex function that consists of the sum of a large number of component functions. This type of minimization arises in a dual context from Lagrangian relaxation of the coupling constraints of large-scale separable problems. The idea is to perform the subgradient iteration incrementally, by sequentially taking steps along the subgradients of the component functions, with intermediate adjustment of the variables after processing each component function. This incremental approach has been very successful in solving large differentiable least squares problems, such as those arising in the training of neural networks, and it has resulted in a much better practical rate of convergence than the steepest descent method. In this paper, we establish the convergence properties of a number of variants of incremental subgradient methods, including some that are stochastic. Based on the analysis and computational experiments, the methods appear very promising and effective for important classes of large problems. A particularly interesting discovery is that randomizing the order in which the component functions are selected for iteration substantially improves the convergence rate.

[1] Research supported by NSF under Grant ACI-9873339.
[2] Dept. of Electrical Engineering and Computer Science, M.I.T., Cambridge, MA 02139.
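The iteration described in the abstract is easy to state concretely. Below is a minimal Python sketch for a toy piecewise-linear objective f(x) = sum_i |a_i^T x - b_i|, showing both the cyclic order and the randomized order of component selection mentioned in the abstract. The problem data, the diminishing stepsize rule, and the cycle count are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy nonsmooth problem: f(x) = sum_i |a_i^T x - b_i|  (hypothetical data).
m, n = 50, 5
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n)

def component_subgradient(x, i):
    """A subgradient of f_i(x) = |a_i^T x - b_i| at x (sign(0) = 0 is valid)."""
    return np.sign(A[i] @ x - b[i]) * A[i]

def incremental_subgradient(x0, num_cycles=200, randomize=False):
    x = x0.copy()
    for k in range(num_cycles):
        alpha = 1.0 / (k + 1)  # diminishing stepsize, one common choice
        order = rng.permutation(m) if randomize else range(m)
        for i in order:
            # Intermediate adjustment of x after each component function,
            # rather than summing all component subgradients first.
            x = x - alpha * component_subgradient(x, i)
    return x

f = lambda x: np.abs(A @ x - b).sum()
print("cyclic:    ", f(incremental_subgradient(np.zeros(n))))
print("randomized:", f(incremental_subgradient(np.zeros(n), randomize=True)))
```

The only difference between the two variants is the order in which the components are visited; the per-component step itself is a plain subgradient step with the current stepsize.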
Similar references
A Randomized Incremental Subgradient Method for Distributed Optimization in Networked Systems
We present an algorithm that generalizes the randomized incremental subgradient method with fixed stepsize due to Nedić and Bertsekas [SIAM J. Optim., 12 (2001), pp. 109–138]. Our novel algorithm is particularly suitable for distributed implementation and execution, and possible applications include distributed optimization, e.g., parameter estimation in networks of tiny wireless sensors. The s...
Incremental Subgradients for Constrained Convex Optimization: A Unified Framework and New Methods
We present a unifying framework for nonsmooth convex minimization bringing together ε-subgradient algorithms and methods for the convex feasibility problem. This development is a natural step for ε-subgradient methods in the direction of constrained optimization since the Euclidean projection frequently required in such methods is replaced by an approximate projection, which is often easier to co...
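To make the approximate-projection idea in this snippet concrete, here is a hedged Python sketch: the feasible set is an intersection of halfspaces, and the exact Euclidean projection onto the intersection is replaced by a single closed-form relaxation step onto the most violated halfspace. The data, stepsize, and use of exact (rather than ε-) subgradients are simplifying assumptions for illustration, not details of the cited method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Feasible set X = intersection of halfspaces {x : c_j^T x <= d_j}
# (toy data, constructed so that X is nonempty).
n, p = 5, 20
C = rng.standard_normal((p, n))
d = C @ rng.standard_normal(n) + 0.5

def approx_project(x):
    """Approximate projection: one closed-form projection onto the most
    violated halfspace, instead of projecting onto X itself."""
    viol = C @ x - d
    j = int(np.argmax(viol))
    if viol[j] <= 0.0:
        return x  # already feasible
    return x - (viol[j] / (C[j] @ C[j])) * C[j]

# Minimize f(x) = ||x - target||_1 over X; sign(x - target) is a subgradient.
target = rng.standard_normal(n)
x = np.zeros(n)
for k in range(500):
    alpha = 1.0 / (k + 1)
    x = approx_project(x - alpha * np.sign(x - target))

print("max constraint violation:", max(0.0, float((C @ x - d).max())))
```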
Approximate Primal Solutions and Rate Analysis for Dual Subgradient Methods
In this paper, we study methods for generating approximate primal solutions as a by-product of subgradient methods applied to the Lagrangian dual of a primal convex (possibly nondifferentiable) constrained optimization problem. Our work is motivated by constrained primal problems with a favorable dual problem structure that leads to efficient implementation of dual subgradient methods, such as ...
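The by-product primal recovery described in this snippet can be illustrated with a small linear program: run projected subgradient ascent on the Lagrangian dual and keep a running average of the Lagrangian minimizers as the approximate primal solution. The instance, stepsize rule, and iteration count below are illustrative assumptions, not taken from the cited paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy LP: min c^T x  s.t.  A x <= b,  x in [0, 1]^n, with A x <= b relaxed.
n, m = 8, 3
c = rng.standard_normal(n)
A = rng.random((m, n))
b = A.sum(axis=1) / 2.0

mu = np.zeros(m)        # dual multipliers for the relaxed constraints
x_avg = np.zeros(n)     # running average of primal iterates
for k in range(1, 1001):
    # The Lagrangian subproblem min_{x in [0,1]^n} (c + A^T mu)^T x is
    # separable: set x_i = 1 exactly where the reduced cost is negative.
    x_k = (c + A.T @ mu < 0.0).astype(float)
    g = A @ x_k - b                             # a subgradient of the dual
    mu = np.maximum(0.0, mu + (1.0 / k) * g)    # projected dual ascent
    x_avg += (x_k - x_avg) / k                  # primal averaging

print("averaged primal cost:    ", float(c @ x_avg))
print("max constraint violation:", max(0.0, float((A @ x_avg - b).max())))
```

Individual dual iterates need not yield feasible primal points; the averaged iterate typically has much smaller constraint violation, which is the effect the rate analysis in such papers quantifies.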